

Self-Retrieval: End-to-End Information Retrieval with One Large Language Model

Neural Information Processing Systems

The rise of large language models (LLMs) has significantly transformed both the construction and application of information retrieval (IR) systems. However, current interactions between IR systems and LLMs remain limited: LLMs merely serve as components within IR systems, and IR systems are constructed independently of LLMs. This separated architecture restricts knowledge sharing and deep collaboration between them. In this paper, we introduce Self-Retrieval, a novel end-to-end LLM-driven information retrieval architecture.


Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models

Jiaqi Li

Neural Information Processing Systems

Machine unlearning (MU) empowers individuals with the 'right to be forgotten' by removing their private or sensitive information encoded in machine learning models. However, it remains uncertain whether MU can be effectively applied to Multimodal Large Language Models (MLLMs), particularly in scenarios of forgetting the leaked visual data of concepts.




Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision

Neural Information Processing Systems

The existing graph neural architecture search (GNAS) methods heavily rely on supervised labels during the search process, failing to handle ubiquitous scenarios where supervision is not available. In this paper, we study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The key problem is to discover the latent graph factors that drive the formation of graph data as well as the underlying relations between the factors and the optimal neural architectures.



Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation

Neural Information Processing Systems

To achieve the highest perceptual quality, state-of-the-art diffusion models are optimized with objectives that typically look very different from the maximum likelihood and the Evidence Lower Bound (ELBO) objectives.